Video accessibility is crucial for blind and low vision users to participate equitably in education, employment, and entertainment. Despite the availability of professional and amateur services and tools, most human-generated descriptions are expensive and time-consuming. Moreover, the rate at which humans can produce descriptions cannot match the speed at which videos are produced. To overcome the growing gap in video accessibility, we developed a hybrid system of two tools that 1) automatically generates descriptions for videos and 2) provides answers or additional descriptions in response to user queries about a video. Results from a mixed-methods study with 26 blind and low vision participants show that our system significantly improved users' comprehension and enjoyment of the selected videos when both tools were used in tandem. In addition, participants reported no significant difference between being presented with auto-generated descriptions and human-revised auto-generated descriptions. Our results demonstrate enthusiasm for the developed system and its promise for providing customized access to videos. We discuss the limitations of the current work and provide recommendations for the future development of automated video description tools.
Multi-object tracking is a cornerstone capability of any robotic system. Most approaches follow a tracking-by-detection paradigm. However, within this framework, detectors operate in a low-precision, high-recall regime, ensuring few false negatives at the cost of a high rate of false positives. This burdens the tracking component by making data association and track lifecycle management more challenging. Additionally, false-negative detections in difficult scenarios such as occlusions further degrade tracking performance. We therefore propose a method that learns shape and spatio-temporal affinities between consecutive frames to better distinguish true-positive from false-positive detections and tracks, while compensating for false-negative detections. Our method provides a probabilistic matching of detections that leads to robust data association and track lifecycle management. We quantitatively evaluate our method through ablative experiments and on the nuScenes tracking benchmark, where we achieve state-of-the-art results. Our method not only estimates accurate, high-quality tracks but also decreases the overall number of false-positive and false-negative tracks. Please see our project website for source code and demo videos: sites.google.com/view/shasta-3d-mot/home.
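As an illustration of the general idea, the sketch below pairs existing tracks with new detections by combining a spatial and a shape affinity term, converting affinities into matching probabilities, and rejecting low-probability pairs. The hand-crafted distance terms, the softmax normalization, and the probability threshold are assumptions for illustration only; the paper learns its shape and spatio-temporal affinities rather than using fixed distances.

```python
# Illustrative NumPy sketch of affinity-based probabilistic matching between
# existing tracks and new detections. The affinity terms and weights are
# stand-ins, not the learned affinities of the paper.

import numpy as np
from scipy.optimize import linear_sum_assignment


def affinity_matrix(track_centers: np.ndarray, det_centers: np.ndarray,
                    track_sizes: np.ndarray, det_sizes: np.ndarray,
                    w_spatial: float = 1.0, w_shape: float = 1.0) -> np.ndarray:
    """Combine a spatial (center-distance) term and a shape (size-difference) term."""
    # track_centers: (N, 3), det_centers: (M, 3); sizes have the same shapes.
    d_center = np.linalg.norm(track_centers[:, None, :] - det_centers[None, :, :], axis=-1)
    d_shape = np.linalg.norm(track_sizes[:, None, :] - det_sizes[None, :, :], axis=-1)
    return -(w_spatial * d_center + w_shape * d_shape)  # higher = more similar


def probabilistic_match(aff: np.ndarray, min_prob: float = 0.3):
    """Softmax affinities into per-track matching probabilities, then solve the assignment.

    Pairs whose probability falls below `min_prob` are rejected, leaving unmatched
    detections (possible false positives / new tracks) and unmatched tracks
    (possible false negatives, e.g. occlusions). A real system would also gate on
    the raw affinity so a track far from every detection is not forced to match.
    """
    prob = np.exp(aff - aff.max(axis=1, keepdims=True))
    prob /= prob.sum(axis=1, keepdims=True)          # per-track distribution over detections
    rows, cols = linear_sum_assignment(-prob)        # maximize total matching probability
    matches = [(r, c) for r, c in zip(rows, cols) if prob[r, c] >= min_prob]
    return matches, prob


# Toy usage: two tracks, three detections; the far-away detection stays unmatched.
tracks = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 0.0]])
dets = np.array([[0.2, 0.1, 0.0], [5.1, 4.9, 0.0], [20.0, 20.0, 0.0]])
sizes_t, sizes_d = np.ones((2, 3)), np.ones((3, 3))
matches, prob = probabilistic_match(affinity_matrix(tracks, dets, sizes_t, sizes_d))
print(matches)  # expected: [(0, 0), (1, 1)]
```

Rejected or unmatched entries are where lifecycle decisions happen in such a pipeline: unmatched detections can spawn tentative tracks or be discarded as false positives, and unmatched tracks can be coasted through occlusions or terminated.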